
Search in the Catalogues and Directories

Hits 1 – 20 of 163

1. Probing for the Usage of Grammatical Number ...
2. Estimating the Entropy of Linguistic Distributions ...
3. A Latent-Variable Model for Intrinsic Probing ...
4. On Homophony and Rényi Entropy ...
5. On Homophony and Rényi Entropy ...
6. On Homophony and Rényi Entropy ...
7. Towards Zero-shot Language Modeling ...
8. Differentiable Generative Phonology ...
9. Finding Concept-specific Biases in Form--Meaning Associations ...
10. Searching for Search Errors in Neural Morphological Inflection ...
11. Applying the Transformer to Character-level Transduction ...
    Wu, Shijie; Cotterell, Ryan; Hulden, Mans. - : ETH Zurich, 2021
12. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ...
    Abstract: While the prevalence of large pre-trained language models has led to significant improvements in the performance of NLP systems, recent research has demonstrated that these models inherit societal biases extant in natural language. In this paper, we explore a simple method to probe pre-trained language models for gender bias, which we use to effect a multi-lingual study of gender bias towards politicians. We construct a dataset of 250k politicians from most countries in the world and quantify adjective and verb usage around those politicians' names as a function of their gender. We conduct our study in 7 languages across 6 different language modeling architectures. Our results demonstrate that stance towards politicians in pre-trained language models is highly dependent on the language used. Finally, contrary to previous findings, our study suggests that larger language models do not tend to be significantly more gender-biased than smaller ones. ...
    Keywords: Computation and Language (cs.CL); FOS: Computer and information sciences; Machine Learning (stat.ML)
    URL: https://dx.doi.org/10.48550/arxiv.2104.07505
    https://arxiv.org/abs/2104.07505
13. Probing as Quantifying Inductive Bias ...
14. Revisiting the Uniform Information Density Hypothesis ...
15. Revisiting the Uniform Information Density Hypothesis ...
16. Conditional Poisson Stochastic Beams ...
17. Examining the Inductive Bias of Neural Language Models with Artificial Languages ...
18. Modeling the Unigram Distribution ...
19. Language Model Evaluation Beyond Perplexity ...
20. Differentiable Subset Pruning of Transformer Heads ...


Results by source: Catalogues: 1; Bibliographies: 0; Linked Open Data catalogues: 0; Online resources: 0; Open access documents: 162
© 2013 – 2024 Lin|gu|is|tik